
    Robot-in-the-loop: Prototyping robotic digitisation at the Natural History Museum

    The Natural History Museum, London (NHM) is home to an impressive collection of over 80 million specimens, of which just 5.5 million have been digitised. Like all similar collections, digitisation of these specimens is very labour-intensive, requiring time-consuming manual handling: each specimen is extracted from its curatorial unit, placed for imaging, its labels manually manipulated, and then returned to storage. Thanks to the NHM's team of digitisers, workflows are becoming more efficient as they are refined. However, many of these workflows are highly repetitive and ideally suited to automation, and the museum is now exploring the integration of robots into the digitisation process.

    The NHM has purchased a Techman TM5-900 robotic arm, equipped with integrated Artificial Intelligence (AI) software and additional features such as custom grippers and a 3D scanner. This robotic arm combines advanced imaging technologies, machine learning algorithms, and robotic manipulation capabilities to capture high-quality specimen data, making it possible to digitise vast collections efficiently (Fig. 1).

    We showcase the NHM's application of robotics for digitisation, outlining the use cases developed for implementation and the prototypical workflows already in place at the museum. We will explore our invasive and non-invasive digitisation experiments, the many challenges involved, and the initial results of our early experiments with this transformative technology.

    Classifying organisms and artefacts by their outline shapes

    We often wish to classify objects by their shapes. Indeed, the study of shapes is an important part of many scientific fields, such as evolutionary biology, structural biology, image processing and archaeology. However, mathematical shape spaces are rather complicated and nonlinear. The most widely used methods of shape analysis, geometric morphometrics, treat the shapes as sets of points. Diffeomorphic methods consider the underlying curve rather than points, but have rarely been applied to real-world problems. Using a machine classifier, we tested the ability of several of these methods to describe and classify the shapes of a variety of organic and man-made objects. We find that one method, based on square-root velocity functions (SRVFs), outperforms all others, including a standard geometric morphometric method (eigenshapes), and that it is also superior to human experts using shape alone. When the SRVF approach is constrained to take account of homologous landmarks it can accurately classify objects of very different shapes. The SRVF method identifies a shortest path between shapes, and we show that this can be used to estimate the shapes of intermediate steps in evolutionary series. Diffeomorphic shape analysis methods, we conclude, now provide practical and effective solutions to many shape description and classification problems in the natural and human sciences.
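The SRVF idea described above can be sketched in a few lines of numpy. This is a simplified illustration, not the authors' implementation: it maps each discretely sampled outline to its square-root velocity function, q = c'/sqrt(|c'|), and compares scale-normalised SRVFs by their L2 distance. The full elastic method additionally optimises over rotation and reparameterisation, which this sketch omits.

```python
import numpy as np

def srvf(curve):
    """Square-root velocity function of a discretely sampled 2-D curve.

    curve: (n, 2) array of points sampled along the outline.
    Returns an (n-1, 2) array q = c' / sqrt(|c'|).
    """
    vel = np.diff(curve, axis=0)           # finite-difference velocity
    speed = np.linalg.norm(vel, axis=1)
    speed = np.maximum(speed, 1e-12)       # guard against zero-length steps
    return vel / np.sqrt(speed)[:, None]

def srvf_distance(c1, c2):
    """L2 distance between scale-normalised SRVFs: a simplified proxy
    for the elastic shape distance (no rotation/reparameterisation)."""
    q1, q2 = srvf(c1), srvf(c2)
    q1 = q1 / np.linalg.norm(q1)
    q2 = q2 / np.linalg.norm(q2)
    return np.linalg.norm(q1 - q2)

# Two toy outlines sampled at the same number of points.
t = np.linspace(0, 2 * np.pi, 100)
circle = np.c_[np.cos(t), np.sin(t)]
ellipse = np.c_[2 * np.cos(t), np.sin(t)]

print(srvf_distance(circle, circle))   # 0.0, identical shapes
print(srvf_distance(circle, ellipse))  # positive, shapes differ
```

Because SRVFs live in a (nearly) flat space, the "shortest path between shapes" mentioned in the abstract can be approximated by linearly interpolating between two normalised SRVFs, which is what makes intermediate evolutionary shapes estimable.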

    ALICE Software: Machine learning & computer vision for automatic label extraction

    Insects make up over 70% of the world's known species (Resh and Carde 2009). This is well represented in collections across the world: the Natural History Museum's pinned insect collection alone makes up nearly 37% of the museum's remarkable 80 million specimen collection. This extraordinary dataset is therefore a major focus of digitisation efforts at the Museum. While hardware developments have significantly improved and sped up digitisation processes (Blagoderov et al. 2017), we now concentrate on the latest software and explore whether machine learning can lend a bigger hand in accelerating our digitisation of pinned insects.

    Traditionally, the digitisation of pinned specimens involves the removal of labels (as well as any supplementary specimen material) prior to photographing the specimen. Because label text is often obstructed, whether by other labels stacked on the pin, by the specimen and additional material, or by the pin itself, this process is typically followed by additional photographs of the labels. These steps not only slow down digitisation but also increase the risk of specimen damage. This encouraged the team at the Natural History Museum to develop a novel setup that bypasses the need for removing labels during digitisation: ALICE (Angled Label Image Capture and Extraction) (Dupont and Price 2019).

    ALICE is a multi-camera setup designed to capture images of angled specimens, allowing users to get a full picture of a specimen in a collection, including its labels and the text within. Specifically, ALICE uses four cameras angled at different viewpoints to capture label information, plus two additional cameras providing lateral and dorsal views of the specimen. By viewing all the images taken of one specimen simultaneously, we can obtain a full account of the labels and of the specimen, despite any obstructions.
    This setup notably accelerates parts of the digitisation process, sometimes by up to 7 times (Price et al. 2019). Furthermore, ALICE presents the opportunity to incorporate machine learning and computer vision techniques into software that automates the transcription of the information contained on labels.

    Automatically transcribing text (whether typed or handwritten) from label images leads to the topic of Optical Character Recognition (OCR). Regardless of any obstructions to the labels, standard OCR methods will often fail to detect text on these angled specimens if no preprocessing is done. This was emphasised by Bieniecki et al. (2007), who showed that some standard OCR algorithms are highly sensitive to geometric distortions such as bad angles. Therefore, in our latest ALICE software, we take a five-step approach that segments labels and merges them together using machine learning and computer vision, before turning to standard OCR tools to transcribe the text:

    1. Label segmentation with Convolutional Neural Networks (CNNs);
    2. Label corner approximation;
    3. Perspective transformation of each label, followed by image registration with a given template;
    4. Label-image merging using an averaging technique;
    5. OCR on the merged label.

    While ALICE aims to reveal specimen labels using a multi-camera setup, we ask ourselves whether an alternative approach can also be taken. This takes us to the next phase of our digitisation acceleration research: smarter cameras with cobots (collaborative robots) and the associated software. We explore the potential of a single-camera setup capable of zooming into labels. Where ALICE incorporated intelligence in post-processing, with cobots we can apply machine learning and computer vision techniques in situ to extract label information. This all forms the focus of our presentation.
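Step 3 above, warping an angled label into a flat, front-on view, can be illustrated with a plain-numpy homography estimated via the direct linear transform. This is a generic sketch, not the ALICE code (which a production pipeline would more likely implement with OpenCV's getPerspectiveTransform/warpPerspective); the corner coordinates are hypothetical values standing in for the output of the corner-approximation step.

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from four point
    pairs via the direct linear transform (SVD nullspace solution)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, p):
    """Apply the homography to a single 2-D point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Hypothetical corners of a label detected at an angle (step 2),
# mapped onto an upright 400x150 pixel rectangle.
corners = [(120, 80), (410, 95), (430, 240), (100, 230)]
target = [(0, 0), (400, 0), (400, 150), (0, 150)]
H = homography(corners, target)

print(warp_point(H, corners[0]))  # maps onto the target corner, approx (0, 0)
```

Applying the same transform to every pixel of the label crop yields the rectified view that is then registered against a template and merged in steps 3 and 4.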

    AI-Accelerated Digitisation of Insect Collections: The next generation of Angled Label Image Capture Equipment (ALICE)

    The digitisation of natural science specimens is a shared ambition of many of the largest collections, but the scale of these collections, estimated at no fewer than 1.1 billion specimens (Johnson et al. 2023), continues to challenge even the most resource-rich organisations.

    The Natural History Museum, London (NHM) has been pioneering work to accelerate the digitisation of its 80 million specimens. Since the inception of the NHM Digital Collection Programme in 2014, more than 5.5 million specimen records have been made digitally accessible, a tenfold increase in digitisation compared to when rates were first measured by the NHM in 2008. Even with this investment, it will take circa 150 years to digitise the remaining collections, leading the museum to pursue technology-led solutions alongside increased funding to deliver the next increase in digitisation rate.

    Insects comprise approximately half of all described species and, at the NHM, represent more than one-third (c. 30 million specimens) of the museum's overall collection. Their most common preservation method, attached to a pin alongside a series of labels carrying metadata, makes insect specimens challenging to digitise. Early Artificial Intelligence (AI)-led innovations (Price et al. 2018) resulted in the development of ALICE, the museum's Angled Label Image Capture Equipment, in which a pinned specimen is placed inside a multi-camera setup that captures a series of partial views of the specimen and its labels. Centred around the pin, these images can be digitally combined and reconstructed, using the accompanying ALICE software, to provide a clean image of each label. To do this, a Convolutional Neural Network (CNN) model locates all labels within the images; various image processing tools then transform each label into a two-dimensional viewpoint, align the associated label images, and merge them into one label.
    This allows users to extract label data from the processed label images manually or computationally (e.g., using Optical Character Recognition [OCR] tools) (Salili-James et al. 2022). With the ALICE setup, a user might image an average of 800 specimens per day, and exceptionally up to 1,300. This compares with an average of 250 specimens or fewer daily using more traditional methods, in which the labels are separated and photographed off the pin. Despite this, our original version of ALICE was only suited to a small subset of the collection: when a specimen is very large, there are too many labels, or the labels are too close together, ALICE fails (Dupont and Price 2019).

    Using a combination of updated AI processing tools, we hereby present ALICE version 2. This new version provides faster rates, improved software accuracy, and a more streamlined pipeline. It includes the following updates:

    Hardware: after conducting various tests, we have optimised the camera setup. Further hardware updates include a Light-Emitting Diode (LED) ring light, as well as modifications to the camera mounting.

    Software: our latest software incorporates machine learning and other computer vision tools to segment labels from ALICE images and stitch them together more quickly and with a higher level of accuracy, significantly reducing the image processing failure rate. These processed label images can be combined with the latest OCR tools for automatic transcription and data segmentation.

    Buildkit: we aim to provide a toolkit that any individual or institution can incorporate into their digitisation pipeline. This includes hardware instructions, an extensive guide detailing the pipeline, and new software code accessible via GitHub.

    We provide test data and workflows to demonstrate the potential of ALICE version 2 as an effective, accessible, and cost-saving solution to digitising pinned insect specimens.
    We also describe potential modifications that would enable it to work with other types of specimens.
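The merging step that both ALICE abstracts describe, combining several registered views of the same label into one clean image, can be sketched with toy arrays. The abstracts describe an averaging technique; this sketch uses a per-pixel median, a robust variant of averaging (my substitution, not necessarily the ALICE method), so that an obstruction visible in only one view, such as the pin, is voted out by the unobstructed views.

```python
import numpy as np

def merge_labels(views):
    """Merge registered views of one label into a single image.

    views: list of equally sized grayscale arrays, already aligned
    (registered) to a common template. The per-pixel median suppresses
    artefacts present in a minority of views.
    """
    stack = np.stack(views).astype(float)
    return np.median(stack, axis=0)

# Three registered 4x4 toy "label" views; one view has a dark streak
# simulating the pin crossing the label.
clean = np.full((4, 4), 200.0)
pinned = clean.copy()
pinned[:, 2] = 20.0                       # simulated occlusion in one view
merged = merge_labels([clean, pinned, clean])

print(merged[0, 2])                       # 200.0, the occlusion is voted out
```

With a plain mean the streak would survive as a grey smear (mean of 200, 20, 200 is 140), which is why robust per-pixel statistics are a common choice for this kind of multi-view merge.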